Ultrasound is used for its low cost and its non-ionizing, non-invasive characteristics, and it has become a cornerstone radiological examination. Research on ultrasound applications has also expanded, notably through machine-learning-based image analysis. However, ultrasound data are frequently restricted to closed data sets, with only a few publicly available. Despite being a frequently examined organ, the kidney lacks a publicly available ultrasound data set. The proposed Open Kidney Ultrasound Data Set is the first publicly available set of kidney B-mode ultrasound data that includes annotations for multi-class semantic segmentation. It is based on data retrospectively collected over five years from more than 500 patients, with a mean age of 53.2 +/- 14.7 years, a body mass index of 27.0 +/- 5.4 kg/m², and diabetes, IgA nephropathy, and hypertension as the most common primary diseases. View labels and fine-grained manual annotations from two expert sonographers are provided. Notably, the data include both native and transplanted kidneys. Initial benchmarking was performed, demonstrating a state-of-the-art algorithm that achieves a Dice-Sørensen coefficient of 0.74 for the kidney capsule. This is a high-quality data set, including two sets of expert annotations, with more images than previously available. By increasing access to kidney ultrasound data, future researchers may be able to create novel image analysis techniques for tissue characterization, disease detection, and prognostication.
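For reference, the Dice-Sørensen coefficient reported above can be computed from a pair of binary masks as in the minimal numpy sketch below; the toy masks are illustrative and not taken from the data set.

```python
import numpy as np

def dice_sorensen(pred: np.ndarray, target: np.ndarray) -> float:
    """Dice-Sorensen coefficient between two binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    denom = pred.sum() + target.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / denom

# Toy example: two partially overlapping 2x2 masks on a 4x4 grid
a = np.zeros((4, 4), dtype=bool); a[1:3, 1:3] = True
b = np.zeros((4, 4), dtype=bool); b[1:3, 2:4] = True
print(dice_sorensen(a, b))  # 0.5
```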
Modeling ultrasound speckle has generated considerable interest for its ability to characterize tissue properties. Since speckle depends on the underlying tissue architecture, modeling it may aid tasks such as segmentation or disease detection. However, for the transplanted kidney, where ultrasound is commonly used to study dysfunction, it is currently unknown which statistical distribution best characterizes this speckle. This is especially true for the regions of the transplanted kidney: the cortex, the medulla, and the central echogenic complex. Furthermore, it is unclear how these distributions vary with patient variables such as age, sex, body mass index, primary disease, or donor type. These characteristics may influence speckle modeling given their effect on kidney anatomy. We are the first to investigate these two aims. B-mode images from n = 821 kidney transplant recipients were automatically segmented into the cortex, medulla, and central echogenic complex using a neural network. Seven different probability distributions were fitted to each region. The model parameters of the Rayleigh and Nakagami distributions differed significantly between the three regions (p <= 0.05). While both had excellent goodness of fit, the Nakagami had a higher Kullback-Leibler divergence. Recipient age was weakly correlated with the scale parameter in the cortex (Omega: rho = 0.11, p = 0.004), while body mass index was weakly correlated with the shape parameter in the medulla (m: rho = 0.08, p = 0.04). Sex, primary disease, and donor type showed no correlations. Based on our findings, we propose that the Nakagami distribution can be used to characterize transplanted kidneys regionally and across most patient characteristics.
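To make the fitting procedure concrete, the sketch below fits Nakagami and Rayleigh distributions to a stand-in pixel sample with scipy and compares them via a histogram-based Kullback-Leibler divergence; the Spearman correlation at the end mirrors the kind of rho values reported above. All data here are synthetic placeholders, and Omega = scale^2 follows scipy's Nakagami parameterization.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Stand-in for pixel intensities from one segmented region (e.g., cortex);
# in the study these would come from B-mode pixels inside the region mask.
pixels = stats.nakagami.rvs(0.9, scale=1.2, size=5000, random_state=rng)

# Fit both distributions with the location fixed at zero
m, _, scale = stats.nakagami.fit(pixels, floc=0)
omega = scale**2  # conventional Omega (spread) parameter
_, ray_scale = stats.rayleigh.fit(pixels, floc=0)

# Compare goodness of fit via KL divergence on a shared histogram support
hist, edges = np.histogram(pixels, bins=100, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
kl_nak = stats.entropy(hist + 1e-12,
                       stats.nakagami.pdf(centers, m, scale=scale) + 1e-12)
kl_ray = stats.entropy(hist + 1e-12,
                       stats.rayleigh.pdf(centers, scale=ray_scale) + 1e-12)
print(f"m={m:.3f}, Omega={omega:.3f}, KL nak={kl_nak:.4f}, KL ray={kl_ray:.4f}")

# Correlate a fitted parameter with a patient variable (Spearman's rho);
# the per-patient Omega values below are placeholders, not real fits.
ages = rng.normal(53, 14, size=30)
omegas = omega + rng.normal(0, 0.05, size=30)
rho, p = stats.spearmanr(ages, omegas)
```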
While prior work has established that the use of parallel data is conducive to cross-lingual learning, it is unclear whether the improvements come from the data itself, or whether it is the modeling of parallel interactions that matters. Exploring this, we examine the usage of unsupervised machine translation to generate synthetic parallel data, and compare it to supervised machine translation and gold parallel data. We find that even model-generated parallel data can be useful for downstream tasks, in both a general setting (continued pretraining) and a task-specific setting (translate-train), although our best results are still obtained using real parallel data. Our findings suggest that existing multilingual models do not exploit the full potential of monolingual data, and prompt the community to reconsider the traditional categorization of cross-lingual learning approaches.
Logical reasoning over text is an important ability that requires understanding the information present in the text and its interconnections, and then reasoning through them to infer new conclusions. Prior works on improving the logical reasoning ability of language models require complex processing of training data (e.g., aligning symbolic knowledge to text), yielding task-specific data augmentation solutions that restrict the learning of general logical reasoning skills. In this work, we propose APOLLO, an adaptively pretrained language model with improved logical reasoning abilities. We select a subset of Wikipedia, based on a set of logical inference keywords, for continued pretraining of a language model. We use two self-supervised loss functions: a modified masked language modeling loss in which only words of specific parts of speech, which would likely require more reasoning than basic language understanding, are masked, and a sentence-level classification loss that teaches the model to distinguish between entailment and contradiction types of sentences. The proposed training paradigm is both simple and independent of task formats. We demonstrate the effectiveness of APOLLO by comparing it with prior baselines on two logical reasoning datasets. APOLLO performs comparably on ReClor and outperforms baselines on LogiQA.
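The selective masking idea can be illustrated in a few lines of Python. The Penn Treebank tag set below is a guess at what counts as "reasoning-heavy" (APOLLO defines its own criterion), and the hand-tagged example sentence is purely illustrative; in practice the tags would come from a POS tagger such as NLTK or spaCy.

```python
# Tags whose words get masked: verbs, adverbs, modals, conjunctions,
# prepositions (an assumed set, not the paper's exact choice).
MASK_TAGS = {"VB", "VBD", "VBG", "VBN", "VBP", "VBZ", "RB", "MD", "IN", "CC"}

def selective_mask(tagged_tokens, mask_token="[MASK]"):
    """Replace words whose POS tag is in MASK_TAGS; keep the rest."""
    return " ".join(mask_token if tag in MASK_TAGS else word
                    for word, tag in tagged_tokens)

# Toy usage with hand-tagged tokens
sent = [("If", "IN"), ("the", "DT"), ("valve", "NN"), ("fails", "VBZ"),
        (",", ","), ("pressure", "NN"), ("will", "MD"), ("rise", "VB"), (".", ".")]
print(selective_mask(sent))
# [MASK] the valve [MASK] , pressure [MASK] [MASK] .
```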
Autonomous vehicles are being deployed with a spectrum of capability, extending from driver assistance features for the highway in personal vehicles (SAE Level 2+) to fully autonomous fleet ride sharing services operating in complex city environments (SAE Level 4+). These levels of autonomy often operate in different physical environments with different degrees of assumed driver in-the-loop oversight, and hence have very different system and subsystem requirements. At the heart of SAE Level 2 to 5 systems is localization and mapping, which ranges from road determination for feature geofencing or high-level routing, through lane determination for advanced driver assistance, to where-in-lane positioning for full vehicle control. We assess localization and mapping requirements for different levels of autonomy and supported features. This work provides a framework for system decomposition, including the level of redundancy needed to achieve the target level of safety. We examine several representative autonomous and assistance features and make recommendations on positioning requirements as well as map georeferencing and information integrity.
Computational catalysis is playing an increasingly significant role in the design of catalysts across a wide range of applications. A common task for many computational methods is the need to accurately compute the minimum binding energy - the adsorption energy - for an adsorbate and a catalyst surface of interest. Traditionally, the identification of low-energy adsorbate-surface configurations relies on heuristic methods and researcher intuition. As the desire to perform high-throughput screening increases, it becomes challenging to use heuristics and intuition alone. In this paper, we demonstrate that machine learning potentials can be leveraged to identify low-energy adsorbate-surface configurations more accurately and efficiently. Our algorithm provides a spectrum of trade-offs between accuracy and efficiency, with one balanced option finding the lowest-energy configuration, within a 0.1 eV threshold, 86.63% of the time, while achieving a 1387x speedup in computation. To standardize benchmarking, we introduce the Open Catalyst Dense dataset containing nearly 1,000 diverse surfaces and 87,045 unique configurations.
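A minimal ASE sketch of this screening loop is shown below: relax an adsorbate over several candidate sites and keep the lowest-energy configuration. The EMT calculator stands in for the trained machine learning potentials used in the paper, and a full adsorption energy would additionally subtract clean-slab and gas-phase reference energies; comparing relaxed totals is enough for ranking sites.

```python
from ase.build import fcc111, add_adsorbate
from ase.calculators.emt import EMT  # stand-in for a trained ML potential
from ase.constraints import FixAtoms
from ase.optimize import BFGS

candidates = []
for site in ("ontop", "bridge", "fcc", "hcp"):
    slab = fcc111("Cu", size=(2, 2, 3), vacuum=8.0)
    # Freeze the two bottom layers (tags increase downward from the surface)
    slab.set_constraint(FixAtoms(indices=[a.index for a in slab if a.tag > 1]))
    add_adsorbate(slab, "O", height=1.8, position=site)
    slab.calc = EMT()
    BFGS(slab, logfile=None).run(fmax=0.05, steps=200)
    candidates.append((slab.get_potential_energy(), site))

energy, site = min(candidates)
print(f"Lowest-energy configuration: {site} site, E = {energy:.3f} eV")
```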
Self-supervised monocular depth estimation has shown impressive results in static scenes. It relies on the multi-view consistency assumption for training networks; however, this assumption is violated in dynamic object regions and under occlusion. Consequently, existing methods show poor accuracy in dynamic scenes, and the estimated depth map is blurred at object boundaries because these are usually occluded in other training views. In this paper, we propose SC-DepthV3 to address these challenges. Specifically, we introduce an external pretrained monocular depth estimation model to generate a single-image depth prior, namely pseudo-depth, based on which we propose novel losses to boost self-supervised training. As a result, our model can predict sharp and accurate depth maps, even when training from monocular videos of highly dynamic scenes. We demonstrate the significantly superior performance of our method over previous methods on six challenging datasets, and we provide detailed ablation studies for the proposed terms. Source code and data will be released at https://github.com/JiawangBian/sc_depth_pl
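The abstract does not spell out the loss terms, but one plausible form of a pseudo-depth-guided loss is an ordinal-consistency penalty: random pixel pairs in the prediction should preserve the depth ordering given by the frozen prior. The PyTorch sketch below is an assumption-laden illustration, not the SC-DepthV3 implementation (see the linked repository for that).

```python
import torch
import torch.nn.functional as F

def pseudo_depth_rank_loss(pred: torch.Tensor, pseudo: torch.Tensor,
                           n_pairs: int = 2048, margin: float = 1e-4) -> torch.Tensor:
    """Hinge loss on random pixel pairs: the predicted depth ordering
    should match the ordering given by the pseudo-depth prior."""
    b, _, h, w = pred.shape
    pf = pred.flatten(1)    # (b, h*w)
    qf = pseudo.flatten(1)
    ia = torch.randint(0, h * w, (b, n_pairs), device=pred.device)
    ib = torch.randint(0, h * w, (b, n_pairs), device=pred.device)
    sign = torch.sign(qf.gather(1, ia) - qf.gather(1, ib))  # prior ordering
    diff = pf.gather(1, ia) - pf.gather(1, ib)               # predicted ordering
    return F.relu(margin - sign * diff).mean()

# Toy usage: random tensors stand in for the network output and the prior
pred = torch.rand(2, 1, 64, 64, requires_grad=True)
pseudo = torch.rand(2, 1, 64, 64)
pseudo_depth_rank_loss(pred, pseudo).backward()
```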
Systems for person re-identification (ReID) can achieve a high accuracy when trained on large fully-labeled image datasets. However, the domain shift typically associated with diverse operational capture conditions (e.g., camera viewpoints and lighting) may translate to a significant decline in performance. This paper focuses on unsupervised domain adaptation (UDA) for video-based ReID - a relevant scenario that is less explored in the literature. In this scenario, the ReID model must adapt to a complex target domain defined by a network of diverse video cameras based on tracklet information. State-of-the-art methods cluster unlabeled target data, yet domain shifts across target cameras (sub-domains) can lead to poor initialization of clustering methods that propagates noise across epochs, thus preventing the ReID model from accurately associating samples of the same identity. In this paper, a UDA method is introduced for video person ReID that leverages knowledge of video tracklets, and of the distribution of frames captured over target cameras, to improve the performance of CNN backbones trained using pseudo-labels. Our method relies on an adversarial approach, where a camera-discriminator network is introduced to extract discriminant camera-independent representations, facilitating the subsequent clustering. In addition, a weighted contrastive loss is proposed to leverage the confidence of clusters, and mitigate the risk of incorrect identity associations. Experimental results obtained on three challenging video-based person ReID datasets - PRID2011, iLIDS-VID, and MARS - indicate that our proposed method can outperform related state-of-the-art methods. Our code is available at: https://github.com/dmekhazni/CAWCL-ReID
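One plausible form of the proposed weighted contrastive loss is a cluster-contrastive cross-entropy in which each sample's contribution is scaled by its pseudo-label confidence. The PyTorch sketch below is an illustration under that assumption, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def weighted_contrastive_loss(feats, centroids, pseudo_labels, confidence, tau=0.1):
    """Cluster-contrastive loss where each sample is weighted by the
    confidence of its pseudo-label (e.g., a distance-based cluster score)."""
    feats = F.normalize(feats, dim=1)
    centroids = F.normalize(centroids, dim=1)
    logits = feats @ centroids.t() / tau  # (n, k) cosine similarities
    per_sample = F.cross_entropy(logits, pseudo_labels, reduction="none")
    return (confidence * per_sample).sum() / confidence.sum().clamp_min(1e-8)

# Toy usage with random features, centroids, pseudo-labels, and confidences
n, d, k = 8, 128, 3
feats = torch.randn(n, d, requires_grad=True)
centroids = torch.randn(k, d)
labels = torch.randint(0, k, (n,))
conf = torch.rand(n)
weighted_contrastive_loss(feats, centroids, labels, conf).backward()
```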
Human and robot partners increasingly need to work together to perform tasks as a team. Robots designed for such collaboration must reason about how their task-completion strategies interplay with the behavior and skills of their human team members as they coordinate on achieving joint goals. Our goal in this work is to develop a computational framework for robot adaptation to human partners in human-robot team collaborations. We first present an algorithm for autonomously recognizing available task-completion strategies by observing human-human teams performing a collaborative task. By transforming team actions into low-dimensional representations using hidden Markov models, we can identify strategies without prior knowledge. Robot policies are learned on each of the identified strategies to construct a Mixture-of-Experts model that adapts to the task strategies of unseen human partners. We evaluate our model on a collaborative cooking task using an Overcooked simulator. Results of an online user study with 125 participants demonstrate that our framework improves the task performance and collaborative fluency of human-agent teams, as compared to state-of-the-art reinforcement learning methods.
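The HMM-based strategy identification step might look like the sketch below: a shared GaussianHMM is fitted over all demonstrations, each demonstration is embedded as its mean posterior state occupancy, and the embeddings are clustered into candidate strategies. The feature sequences, state count, and cluster count here are placeholders, not the paper's settings.

```python
import numpy as np
from hmmlearn import hmm
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Stand-ins for team-action feature sequences (one array per demonstration)
demos = [rng.normal(size=(rng.integers(40, 80), 6)) for _ in range(20)]

# Fit a shared HMM over the concatenated demonstrations
X = np.concatenate(demos)
lengths = [len(d) for d in demos]
model = hmm.GaussianHMM(n_components=4, covariance_type="diag", random_state=0)
model.fit(X, lengths)

# Low-dimensional embedding: mean posterior state occupancy per demonstration
embeds = np.stack([model.predict_proba(d).mean(axis=0) for d in demos])

# Cluster embeddings into candidate strategies (cluster count is a guess)
strategies = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(embeds)
```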
Event sensing is a major component of bio-inspired flight guidance and control systems. We explore the usage of event cameras for estimating time-to-contact (TTC) with a surface during ventral landing. This is achieved by estimating divergence (inverse TTC), i.e., the rate of radial optic flow, from the event stream generated during landing. Our core contributions are a novel contrast maximization formulation for event-based divergence estimation, and a branch-and-bound algorithm that exactly maximizes the contrast and finds the optimal divergence value. GPU acceleration is applied to speed up the global algorithm. Another contribution is a new dataset containing real event streams from ventral landings, which is used to test and benchmark our method. Owing to the global optimization, our algorithm is much more capable of recovering the true divergence than other heuristic divergence estimators or event-based optic flow methods. With GPU acceleration, our method also achieves competitive runtimes.
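The contrast maximization idea can be sketched compactly: warp each event radially by a candidate divergence, accumulate an event image, and score its variance; the candidate that best collapses events back onto their source points maximizes the contrast. The grid search below stands in for the paper's branch-and-bound optimizer, and the synthetic event stream replaces real landing data.

```python
import numpy as np

def contrast(events_xyt, D, t_ref, img_size=64):
    """Warp events radially by candidate divergence D (to time t_ref)
    and measure the variance (contrast) of the resulting event image."""
    x, y, t = events_xyt
    scale = np.exp(-D * (t - t_ref))  # constant-divergence radial warp
    xw = np.clip((x * scale + img_size / 2).astype(int), 0, img_size - 1)
    yw = np.clip((y * scale + img_size / 2).astype(int), 0, img_size - 1)
    img = np.zeros((img_size, img_size))
    np.add.at(img, (yw, xw), 1.0)
    return img.var()

# Synthetic diverging event stream: 100 scene points, each firing 60 events
# while expanding radially with true divergence 2.0 1/s
rng = np.random.default_rng(0)
n_pts, n_ev = 100, 60
r0 = rng.uniform(2, 20, n_pts)
theta = rng.uniform(0, 2 * np.pi, n_pts)
t = rng.uniform(0, 0.2, (n_pts, n_ev))
r = r0[:, None] * np.exp(2.0 * t)
events = ((r * np.cos(theta[:, None])).ravel(),
          (r * np.sin(theta[:, None])).ravel(),
          t.ravel())

# Grid search over candidate divergences (branch-and-bound in the paper)
Ds = np.linspace(0.0, 4.0, 41)
best_D = Ds[np.argmax([contrast(events, D, t_ref=0.0) for D in Ds])]
print(f"Estimated divergence: {best_D:.2f} 1/s")  # ~2.0
```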